5 research outputs found

    Elimination of quotients in various localisations of premodels into models

    Full text link
    The contribution of this article is fourfold. It (1) unifies various schemes of premodels/models, including situations such as presheaves/sheaves, sheaves/flabby sheaves, prespectra/$\Omega$-spectra, simplicial topological spaces/(complete) Segal spaces, pre-localised rings/localised rings, functors in categories/strong stacks and, to some extent, functors from a limit sketch to a model category versus the homotopical models for the limit sketch; (2) provides a general construction from the premodels to the models; (3) proposes techniques that allow one to assess the nature of the universal properties associated with this construction; (4) shows that the obtained localisation admits a particular presentation, which organises the structural and relational information into bundles of data. This presentation is obtained via a process called an elimination of quotients, and its aim is to facilitate the handling of the relational information appearing in the construction of higher-dimensional objects such as weak $(\omega,n)$-categories, weak $\omega$-groupoids and higher moduli stacks.
    Comment: The text is the same as in v6; this version contains corrections to the published MDPI paper. The main reason for this change is that the diagram of Proposition 3.1 was meant to be a 3-dimensional diagram (while only the front face appeared in the published paper). The wording of some sentences and the diagram of Example 6.42 are changed accordingly. A typo in the table of Ex. 6.42 is corrected.
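    To make the first item concrete with one instance from the list above (an illustration of the pattern, not text taken from the paper): for presheaves versus sheaves, a presheaf $F$ on a space plays the role of a premodel, and it is a model (a sheaf) precisely when, for every open cover $\{U_i\}$ of an open set $U$, the diagram

    \[ F(U) \longrightarrow \prod_i F(U_i) \rightrightarrows \prod_{i,j} F(U_i \cap U_j) \]

    is an equaliser. Sheafification is then the universal construction turning an arbitrary presheaf into a sheaf, which is the role played in general by the construction of item (2).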

    Backprop as Functor: A compositional perspective on supervised learning

    Full text link
    A supervised learning algorithm searches over a set of functions $A \to B$ parametrised by a space $P$ to find the best approximation to some ideal function $f\colon A \to B$. It does this by taking examples $(a, f(a)) \in A \times B$, and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent (with respect to a fixed step size and an error function satisfying a certain property) defines a monoidal functor from a category of parametrised functions to this category of update rules. This provides a structural perspective on backpropagation, as well as a broad generalisation of neural networks.
    Comment: 13 pages + 4 page appendix
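    A minimal sketch of how such composable update rules might look in code. The names (Learner, implement, update, request) and the tiny gradient-descent example are illustrative assumptions based on the abstract's description, not the paper's definitions.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Learner:
        # A parametrised function A -> B bundled with its update rule.
        param: Any                              # current point in the parameter space P
        implement: Callable[[Any, Any], Any]    # I : P x A -> B, the function being learned
        update: Callable[[Any, Any, Any], Any]  # U : P x A x B -> P, reacts to an example (a, b)
        request: Callable[[Any, Any, Any], Any] # r : P x A x B -> A, signal passed to the previous stage

    def compose(g, f):
        # Sequential composite "g after f": learners over A -> B and B -> C give one over A -> C,
        # whose update threads the training signal backwards through g's request map.
        def implement(p, a):
            pf, pg = p
            return g.implement(pg, f.implement(pf, a))
        def update(p, a, c):
            pf, pg = p
            b = f.implement(pf, a)
            return (f.update(pf, a, g.request(pg, b, c)), g.update(pg, b, c))
        def request(p, a, c):
            pf, pg = p
            b = f.implement(pf, a)
            return f.request(pf, a, g.request(pg, b, c))
        return Learner((f.param, g.param), implement, update, request)

    # One-weight linear learner under squared error with fixed step size eps; the request map
    # a |-> a - d/da E(I(w, a), b) is the assumed gradient-descent recipe, for illustration only.
    eps = 0.1
    def linear(w0):
        return Learner(
            param=w0,
            implement=lambda w, a: w * a,
            update=lambda w, a, b: w - eps * (w * a - b) * a,
            request=lambda w, a, b: a - (w * a - b) * w,
        )

    net = compose(linear(0.5), linear(2.0))      # composite parametrised function A -> C
    print(net.implement(net.param, 3.0))         # forward pass: 0.5 * (2.0 * 3.0) = 3.0
    print(net.update(net.param, 3.0, 1.5))       # one update step on the example (3.0, 1.5)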

    Sketches in Higher Category Theory

    No full text